
Interrater Reliability in the Content Analysis of Preparatory Information for Mechanically Ventilated Patients

Journal of Korean Academy of Fundamentals of Nursing, 1998, Vol. 5, No. 2, pp. 269-279
KMID : 0388319980050020269
Hwa Soon Kim


Conclusion
This paper explained the calculation and interpretation of interrater reliability in content analysis and interaction studies, in which the observer or rater can act as a major source of measurement error. In the first step of establishing interrater reliability, calculating unitizing reliability between researchers can serve as a precondition for calculating interpretive reliability. This precondition becomes all the more essential as the degree of inference required of observers to identify subjects' behaviors increases.
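The abstract gives no formula for unitizing reliability. One classical index of unitizing agreement, not named in this abstract and offered here only as an illustrative sketch, is Guetzkow's U, which compares how many units each coder marks off in the same record:

U = \frac{O_1 - O_2}{O_1 + O_2}

where O_1 and O_2 are the numbers of units identified by the two coders; values near zero indicate that the coders partitioned the record similarly, so that the identified units can be matched element by element before interpretive coding begins.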
In the step of calculating interpretive reliability, kappa is widely used, but kappa is not an index that can be applied satisfactorily in every situation. In particular, when observer bias is large or when the categories differ greatly in their relative frequencies of occurrence, the use of Cohen's kappa may be inappropriate, and at times its interpretation requires caution. When kappa is not appropriate for testing interrater reliability, several alternatives are available. First, when observer bias is large, the tetrachoric correlation can be used to calculate agreement between observers; the tetrachoric correlation is not greatly affected by the pattern of the row and column marginal totals of the categories (Hutchinson, 1993). Second, to evaluate substantively the effect of category prevalence on agreement, the positive agreement (ppos) and negative agreement (pneg) indices proposed by Cicchetti & Feinstein (1990) can be used. Finally, the prevalence-adjusted, bias-adjusted kappa, which corrects for the effects of differences in prevalence and of observer bias, has been proposed (Byrt et al., 1993). In particular, adjusted estimates need to be reported together with kappa to allow meaningful comparison across agreement studies (Byrt et al., 1993). Moreover, for accurate interpretation of kappa, percentage agreement must always be reported along with it.
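For reference, the indices discussed above can be written in the standard two-rater, two-category notation (the cell labels are supplied here for illustration and are not quoted from the paper). For n judgments cross-classified so that a = both raters code the category as present, d = both code it as absent, and b, c = the two kinds of disagreement:

p_o = \frac{a + d}{n}, \qquad p_e = \frac{(a+b)(a+c) + (c+d)(b+d)}{n^2}, \qquad \kappa = \frac{p_o - p_e}{1 - p_e}

p_{pos} = \frac{2a}{2a + b + c}, \qquad p_{neg} = \frac{2d}{2d + b + c}, \qquad \mathrm{PABAK} = 2p_o - 1

Here PABAK is the two-category form of the prevalence-adjusted, bias-adjusted kappa; with k categories it generalizes to (k p_o - 1)/(k - 1).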
Most studies report only an overall reliability estimate when calculating interpretive reliability. However, global reliability does not show how consistently observers used each individual category. Category-by-category reliability estimates should therefore be presented to show how consistently observers applied each category.
When the judgments are not dichotomous but polytomous nominal data, several dichotomous combinations can be constructed, as convenient, in light of the characteristics of the categories, and an individual kappa estimate can be presented for each (Ahn & Yu, 1996).
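As a concrete illustration (the category names are hypothetical, not drawn from the paper): with three nominal categories such as {information, reassurance, instruction}, one can form the splits "information vs. other", "reassurance vs. other", and "instruction vs. other", build a 2x2 table for each split, and report a separate estimate

\kappa_j = \frac{p_{o,j} - p_{e,j}}{1 - p_{e,j}}

for each collapsed table j, with p_{o,j} and p_{e,j} defined exactly as in the two-category formulas above.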
Abstract
In nursing research in which data are collected through clinical observation, analysis of clinical records, or coding of interpersonal interaction in clinical areas, testing and reporting interrater reliability is essential to assure reliable results. Procedures for establishing interrater reliability in these studies should follow two steps. The first step is to determine unitizing reliability, defined as consistency in the identification of the same data elements in a record by two or more raters reviewing that record. Unitizing reliability has rarely been reported in previous studies. It should be tested as a precondition before progressing to the next step.
The next step is to determine interpretive reliability. Cohen's kappa is a preferable method of calculating the extent of agreement between observers or judges because it measures agreement beyond chance. Despite its usefulness, kappa can sometimes yield paradoxical conclusions and can be difficult to interpret. These difficulties result from the fact that kappa is affected in complex ways by the presence of bias between observers and by the true prevalence of particular categories. Therefore, percentage agreement should be reported alongside kappa for adequate interpretation. The presence of bias should be assessed using the bias index, and the effect of prevalence should be assessed using the prevalence index.
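The abstract does not define these two indices. In the same 2x2 notation used earlier (a, d = agreement cells, b, c = disagreement cells, n = total judgments), the definitions given by Byrt et al. (1993) are:

\mathrm{BI} = \frac{b - c}{n}, \qquad \mathrm{PI} = \frac{a - d}{n}

A bias index near zero means the two raters use a category at similar overall rates; a prevalence index far from zero means one outcome dominates, which is precisely the situation in which kappa can behave paradoxically despite high percentage agreement.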
Most researchers have reported only global reliability, which reflects the extent to which coders can consistently use the whole coding system across all categories. Category-by-category reliability also needs to be reported, to reveal the possibility that some categories are harder to use than others.

Keywords
Content Analysis, Preparatory Information, Interrater Reliability, Cohen's kappa
Journal indexing
Korea Research Foundation (KCI), KoreaMed